Search Results: "lars"

28 April 2021

Russell Coker: Links April 2021

Dr Justin Lehmiller's blog post comparing his official (academic-style) and real biographies is interesting [1]. The rest of his blog is interesting too; he works at the Kinsey Institute, so you know he's good. Media Matters has an interesting article on the spread of vaccine misinformation on Instagram [2]. John Goerzen wrote a long post summarising some of the many ways of having a decentralised Internet [3]. One problem he didn't address is how to choose between them; I could spend months of work to set up a fraction of those services. Erasmo Acosta wrote an interesting Medium article, "Could Something as Pedestrian as the Mitochondria Unlock the Mystery of the Great Silence?" [4]. I don't know enough about biology to determine how plausible this is, but it is a worry; I hope that humans will meet extra-terrestrial intelligences at some future time. Meredith Haggerty wrote an insightful Medium article about the love vs money aspects of romantic comedies [5]. Changes in viewer demographics would be one factor that makes lead actors in romantic movies significantly less wealthy in recent times. There is an informative article about ZIP compression and the history of compression in general [6]. Vice has an insightful article about one way of taking over SMS access to phones without affecting voice call or data access [7]. With this method the victim won't notice that their service is being interfered with until it's way too late. They also explain the chain of problems in the US telecommunications industry that led to this. I wonder what's happening in this regard in other parts of the world. The clown code of ethics (8 Commandments) is interesting [8]. Sam Hartman wrote an insightful blog post about the problems with RMS and how to deal with him [9]. Sam Whitton also has an interesting take on this [10]. Another insightful post is by Selam G about RMS's long history of bad behavior and the way universities are run [11]. Cory Doctorow wrote an insightful article for Locus about free markets with a focus on DRM on audio books [12]. We need legislative changes to fix this!

30 March 2021

Emmanuel Kasper: Playing with cri-o, a container runtime built for Kubernetes

Kubernetes is moving away from Docker to alternative container engines that present a smaller core with just the functionality Kubernetes needs. The two most popular alternatives, containerd and CRI-O, are meant to be used programmatically via a Unix domain socket, and therefore have a limited command line interface. Let's play around with CRI-O in a VM.
Install a throwaway VM with Vagrant
apt install vagrant vagrant-libvirt
vagrant init debian/testing64
Start the VM, install dependencies
vagrant up
vagrant ssh
sudo apt update
sudo apt install --yes curl gnupg jq
Install cri-o the container engine
sudo bash
export OS=Debian_Testing VERSION=1.20

echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/libcontainers.list
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/cri-o:$VERSION.list
curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | apt-key add -
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add -
apt install cri-o cri-o-runc containernetworking-plugins conntrack
Verify it is running properly
systemctl restart cri-o
systemctl status cri-o
...
Started Container Runtime Interface for OCI (CRI-O).
Say hello to cri-o via its unix domain socket
curl --silent --unix-socket /var/run/crio/crio.sock http://localhost/info | jq

"storage_driver": "overlay",
"storage_root": "/var/lib/containers/storage",
"cgroup_driver": "systemd",
"default_id_mappings":
"uids": [

"container_id": 0,
"host_id": 0,
"size": 4294967295

],
"gids": [

"container_id": 0,
"host_id": 0,
"size": 4294967295

]


Install crictl, a Kubernetes debugging tool for containers
wget --directory-prefix=/tmp https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.20.0/crictl-v1.20.0-linux-amd64.tar.gz
tar -xaf /tmp/crictl-v1.20.0-linux-amd64.tar.gz -C /usr/local/sbin/
chmod +x /usr/local/sbin/crictl

crictl info

"status":
"conditions": [

"type": "RuntimeReady",
"status": true,
"reason": "",
"message": ""
,

"type": "NetworkReady",
"status": true,
"reason": "",
"message": ""

]



From there on you can create a container following the examples in https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md
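As a rough sketch of what that looks like (pod-config.json and container-config.json are the example pod and container configuration files from the crictl documentation linked above; exact contents depend on your versions), something along these lines should work:
# Pull an image, then create and start a pod sandbox and a container inside it,
# using the sample JSON configs from crictl.md.
sudo crictl pull busybox
POD_ID=$(sudo crictl runp pod-config.json)
CTR_ID=$(sudo crictl create "$POD_ID" container-config.json pod-config.json)
sudo crictl start "$CTR_ID"
sudo crictl ps        # the container should now show up as Running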

22 February 2021

John Goerzen: Recovering Our Lost Free Will Online: Tools and Techniques That Are Available Now

As I ve been thinking and writing about privacy and decentralization lately, I had a conversation with a colleague this week, and he commented about how loss of privacy is related to loss of agency: that is, loss of our ability to make our own choices, pursue our own interests, and be master of our own attention. In terms of telecommunications, we have never really been free, though in terms of Internet and its predecessors, there have been times where we had a lot more choice. Many are too young to remember this, and for others, that era is a distant memory. The irony is that our present moment is one of enormous consolidation of power, and yet also one of a proliferation of technologies that let us wrest back some of that power. In this post, I hope to enlighten or remind us of some of the choices we have lost and also talk about the ways in which we can choose to regain them, already, right now. I will talk about the possibilities, the big dreams that are possible now, and then go into more detail about the solutions. The Problems & Possibilities The limitations of online We make the assumption that we must be online to exchange data. This is reinforced by many modern protocols; Twitter clients, for instance, don t tend to let you make posts by relaying them through disconnected devices. What would it be like if you could fully participate in global communities without a constant Internet connection? If you could share photos with your friends, read the news, read your email, etc. even if you don t have a connection at present? Even if the device you use to do that never has a connection, but can route messages via other devices that do? Would it surprise you to learn that this was once the case? Back in the days of UUCP, much email and Usenet news a global discussion forum that didn t require an Internet connection was relayed via occasional calls over phone lines. This technology remains with us, and has even improved. Sadly, many modern protocols make no effort in this regard. Some email clients will let you compose messages offline to send when you get online later, but the assumption always is that you will be connected to an IP network again soon. NNCP, on the other hand, lets you relay messages over TCP, a radio, a satellite, or a USB stick. Email and Usenet, since they were designed in an era where store-and-forward was valued, can actually still be used in an entirely offline fashion (without ever touching an IP-based network). All it takes is for someone to care to make it happen. You can even still do it over UUCP if you like. The physical and data link layers Many of us just accept that we communicate in a few ways: Wifi for short distances, and then cable modems or DSL for our local Internet connection, and then many people are fuzzy about what happens after that. Or, alternatively, we have 4G phones that are the local Internet connection, and the same fuzzy things happen after. Think about this for a moment. Which of these do you control in any way? Sometimes just wifi, sometimes maybe you have choices of local Internet providers. After that, your traffic is handled by enormous infrastructure companies. There is choice here. People in ham radio have been communicating digitally over long distances without the support of the traditional Internet for decades, but the technology to do this is now more accessible to anyone. 
Long-distance radio has had tremendous innovation in the last decade; cheap radios can now communicate over several miles/km without any other infrastructure at all. We all carry around radios (Wifi and Bluetooth) in our pockets that don t have to be used as mere access points to the Internet or as drivers of headphones, but can also form their own networks directly (Briar). Meshtastic is an example; it s an instant messenger that can form a mesh over many miles/km and requires no IP infrastructure at all. Briar is similar. XBee radios form a mesh in hardware, allowing peers to reach each other (also over many miles/km) with a serial or framed protocol. Loss of peer-to-peer Back in the late 90s, I worked at a university. I had a 386 on my desk for a workstation not a powerful computer even then. But I put the boa webserver on it and could just serve pages on the Internet. I didn t have to get permission. Didn t have to pay a hosting provider. I could just DO it. And of course that is because the university had no firewall and no NAT. Every PC at the university was a full participant on the Internet as much as the servers at Microsoft or DEC. All I needed was a DNS entry. I could run my own SMTP server if I wanted, run a web or Gopher server, and that was that. There are many reasons why this changed. Nowadays most residential ISPs will block SMTP for their customers, and if they didn t, others would; large email providers have decided not to federate with IPs in residential address spaces. Most people have difficulty even getting a static IP address in the first place. Many are behind firewalls, NATs, or both, meaning that incoming connections of any kind are problematic. Do you see what that means? It has weakened the whole point of the Internet being a network of peers. While IP still acts that way, as a practical matter, there are clients that are prevented from being servers by administrative policy they have no control over. Imagine if you, a person with an Internet connection to your laptop or phone, could just decide to host a website, or a forum on it. For moderate levels of load, they are certainly capable of this. The only thing in the way is the network management policies you can t control. Elaborate technologies exist to try to bridge this divide, and some, like Tor or cjdns, can work quite well. More on this below. Expense of running something popular Related to the loss of peer-to-peer infrastructure is the very high cost of hosting something popular. Do you want to share videos with lots of people? That almost certainly is going to require expensive equipment and bandwidth. There is a reason that there are only a small handful of popular video streaming sites online. It requires a ton of money to host videos at scale. What if it didn t? What if you could achieve economies of scale so much that you, an individual, could compete with the likes of YouTube? You wouldn t necessarily have to run ads to support the service. You wouldn t have to have billions of dollars or billions of viewers just to make it work. This technology exists right now. Of course many of you are aware of how Bittorrent leverages the swarm for files. But projects like IPFS, Dat, and Peertube have taken this many steps further to integrate it into a global ecosystem. And, at least in the case of Peertube, this is a thing that works right now in any browser already! Application-level walled gardens I was recently startled at how much excitement there was when Github introduced dark mode . 
Yes, Github now offers two colors on its interface. Already back in the 80s and 90s, many DOS programs had more options than that. Git is a decentralized protocol, but Github has managed to make it centralized. Email is a decentralized protocol pick your own provider, and they all communicate but Facebook and Twitter aren t. You can t just pick your provider for Facebook. It s Facebook or nothing. There is a profit motive in locking others out; these networks want to keep you using their platforms because their real customers are advertisers, and they want to keep showing you ads. Is it possible to have a world where you get to pick your own app for sharing photos, and it works even if your parents use a different one? Yes, yes it is. Mastodon and the Fediverse are fantastic examples for social media. Pixelfed is specifically designed for photos, Mastodon for short-form communication, there s Pleroma for more long-form communication, and they all work together. You can use Mastodon to read Pleroma content or look at Pixelfed photos, and there are many (free) providers of each. Freedom from manipulation I recently wrote about the dangers of the attention economy, so I won t go into a lot of detail here. Fundamentally, you are not the customer of Facebook or Google; advertisers are. They optimize their site to keep you on it as much as possible so that they can show you as many ads as possible which makes them as much money as possible. Ads, of course, are fundamentally seeking to manipulate your behavior ( buy this product ). By lowering the cost of running services, we can give a huge boost to hobbyists and nonprofits that want to do so without an ultimate profit motive. For-profit companies benefit also, with a dramatically reduced cost structure that frees them to pursue their mission instead of so many ads. Freedom from snooping (privacy and anonymity) These days, it s not just government snooping that people think about. It s data stolen by malware, spies at corporations (whether human or algorithmic), and even things like basic privacy of one s own security footage. Here the picture is improving; encryption in transit, at least at a basic level, has become much more common with TLS being a standard these days. Sadly, end-to-end encryption (E2EE) is not nearly as much, perhaps because corporations have a profit motive to have access to your plaintext and metadata. Closely related to privacy is anonymity: that is, being able to do things in an anonymous fashion. The two are not necessarily equal: you could send an encrypted message but reveal who the correspondents are, as with email; or, you could send a plaintext message over a Tor exit node that hides who the correspondents are. It is sometimes difficult to achieve both. Nevertheless, numerous answers exist here that tackle one or both problems, from the Signal messenger to Tor. Solutions That Exist Today Let s dive in to some of the things that exist today. One concept you ll see in many of these is integrated encryption with public keys used for addressing. In other words, your public key is akin to an IP address (and in some cases, is literally your IP address.) Data link and networking technologies (some including P2P) P2P Infrastructure While some of the technologies above, such as cjdns, explicitly facitilitate peer-to-peer communication, there are some other application-level technologies to look at. 
Instant Messengers and Chat I won t go into a lot of detail here since I recently wrote a roundup of secure mesh messengers and also a followup article about Signal and some hidden drawbacks of P2P. Please refer to those articles for some interesting things that are happening in this space. Matrix is a distributed IM platform similar in concept to Slack or IRC, but globally distributed in a mesh. It supports optional E2EE. Social Media I wrote recently about how to join the Fediverse, which covered joining Mastodon, a federeated, decentralized social network. Mastodon is the largest of these, with several million users, and is something of a much nicer version of Twitter. Mastodon is also part of what is known as the Fediverse , which are applications that are loosely joined together by their support of the ActivityPub protocol. Other popular Fediverse applications include Pixelfed (similar to Instagram) and Peertube for sharing video. Peertube is particularly interesting in that it supports Webtorrent for efficiently distributing popular videos. Webtorrent is akin to Bittorrent running efficiently inside your browser. Concluding Remarks Part of my goal with this is encouraging people to dream big, to ask questions like: What could you do if offline were easy? What is possible if you have freedom in the physical and data link layers? Dream big. We re so used to thinking that it s quite difficult for two devices on the Internet to talk to each other. What would be possible if this were actually quite easy? The assumption that costs rise dramatically as popularity increases is also baked into our thought processes. What if that weren t the case could you take on Youtube from your garage? Would lowering barriers to entry lower the ad economy and let nonprofits have more equal footing with large corporations? We have so many walled gardens, from Github to Facebook, that we almost forget it doesn t have to be that way. So having asked these questions, my secondary point is to suggest that these aren t pie-in-the-sky notions. These possibilites are with us right now. You ll notice from this list that virtually every one of these technologies is ad-free at its heart (though some would be capable of serving ads). They give you back your attention. Many preserve privacy, anonymity, or both. Many dramatically improve your freedom of association and communication. Technologies like IPFS and Bittorrent ease the burden of running something popular. Some are quite easy to use (Mastodon or Peertube) while others are much more complex (libp2p or the lower-level mesh network systems). Clearly there is still room for improvement in many areas. But my fundamental point is this: good technology is here, right now. Technical people can vote with their feet and wallets and start using it. Early adopters will help guide the way for the next set of improvements. Join us!

23 December 2020

John Goerzen: How & Why To Use Airgapped Backups

A good backup strategy needs to consider various threats to the integrity of data. For instance: It s that last one that is of particular interest today. A lot of backup strategies are such that if a user (or administrator) has their local account or network compromised, their backups could very well be destroyed as well. For instance, do you ssh from the account being backed up to the system holding the backups? Or rsync using a keypair stored on it? Or access S3 buckets, etc? It is trivially easy in many of these schemes to totally ruin cloud-based backups, or even some other schemes. rsync can be run with delete (and often is, to prune remotes), S3 buckets can be deleted, etc. And even if you try to lock down an over-network backup to be append-only, still there are vectors for attack (ssh credentials, OpenSSL bugs, etc). In this post, I try to explore how we can protect against them and still retain some modern conveniences. A backup scheme also needs to make a balance between: My story so far About 20 years ago, I had an Exabyte tape drive, with the amazing capacity of 7GB per tape! Eventually as disk prices fell, I had external disks plugged in to a server, and would periodically rotate them offsite. I ve also had various combinations of partial or complete offsite copies over the Internet as well. I have around 6TB of data to back up (after compression), a figure that is growing somewhat rapidly as I digitize some old family recordings and videos. Since I last wrote about backups 5 years ago, my scheme has been largely unchanged; at present I use ZFS for local and to-disk backups and borg for the copies over the Internet. Let s take a look at some options that could make this better. Tape The original airgapped backup. You back up to a tape, then you take the (fairly cheap) tape out of the drive and put in another one. In cost per GB, tape is probably the cheapest medium out there. But of course it has its drawbacks. Let s start with cost. To get a drive that can handle capacities of what I d be needing, at least LTO-6 (2.5TB per tape) would be needed, if not LTO-7 (6TB). New, these drives cost several thousand dollars, plus they need LVD SCSI or Fibre Channel cards. You re not going to be hanging one off a Raspberry Pi; these things need a real server with enterprise-style connectivity. If you re particularly lucky, you might find an LTO-6 drive for as low as $500 on eBay. Then there are tapes. A 10-pack of LTO-6 tapes runs more than $200, and provides a total capacity of 25TB sufficient for these needs (note that, of course, you need to have at least double the actual space of the data, to account for multiple full backups in a set). A 5-pack of LTO-7 tapes is a little more expensive, while providing more storage. So all-in, this is going to be in the best possible scenario nearly $1000, and possibly a lot more. For a large company with many TB of storage, the initial costs can be defrayed due to the cheaper media, but for a home user, not so much. Consider that 8TB hard drives can be found for $150 $200. A pair of them (for redundancy) would run $300-400, and then you have all the other benefits of disk (quicker access, etc.) Plus they can be driven by something as cheap as a Raspberry Pi. Fancier tape setups involve auto-changers, but then you re not really airgapped, are you? (If you leave all your tapes in the changer, they can generally be selected and overwritten, barring things like hardware WORM). 
As useful as tape is, for this project it would simply be way more expensive than disk-based options.

Fundamentals of disk-based airgapping
The fundamental thing we need to address with disk-based airgapping is that the machines being backed up have no real-time contact with the backup storage system. This rules out most solutions out there that want to sync by comparing local state with remote state. If one is willing to throw storage efficiency out the window (maybe practical for very small data sets), one could just send a full backup daily. But in reality, what is more likely needed is a way to store a local proxy for the remote state. Then a runner device (a USB stick, disk, etc.) could be plugged into the network, filled with queued data, then plugged into the backup system to have the data dequeued and processed. Some may be tempted to short-circuit this and just plug external disks into a backup system. I've done that for a long time. This is, however, a risk, because it makes those disks vulnerable to whatever may be attacking the local system (anything from lightning to ransomware).

ZFS
ZFS is, it should be no surprise, particularly well suited for this. zfs send/receive can send an incremental stream that represents a delta between two checkpoints (snapshots or bookmarks) on a filesystem. It can do this very efficiently, much more so than walking an entire filesystem tree. Additionally, with the recent addition of ZFS crypto to ZFS on Linux, the replication stream can optionally reflect the encrypted data. Yes, as long as you don't need to mount them, you can mostly work with ZFS datasets on an encrypted basis, and can directly tell zfs send to just send the encrypted data instead of the decrypted data. The downside of ZFS is the resource requirements at the destination, which in terms of RAM are higher than most of the older Raspberry Pi-style devices. Still, one could perhaps just save off zfs send streams and restore them later if need be, but that implies a periodic resend of a full stream, an inefficient operation. Deduplicating software such as borg could be used on those streams (though with less effectiveness if they're encrypted).

Tar
Perhaps surprisingly, tar in listed-incremental mode can solve this problem for non-ZFS users. It will keep a local cache of the state of the filesystem as of the time of the last run of tar, and can generate new tarballs that reflect the changes since the previous run (even deletions). This can achieve a similar result to ZFS send/receive, though in a much less elegant way.

Bacula / Bareos
Bacula and its fork Bareos both have support for a FIFO destination. Theoretically this could be used to queue up data for transfer to the airgapped machine. This support is very poorly documented in both and is rumored to have bitrotted, however.

rdiff and xdelta
rdiff and xdelta can be used as a sort of non-real-time rsync, at least on a per-file basis. Theoretically, one could generate a full backup (with tar, ZFS send, or whatever), take an rdiff signature, and send over the file while keeping the signature. On the next run, another full backup is piped into rdiff, and on the basis of the signature file of the old and the new data, it produces a binary patch that can be queued for the backup target to update its stored copy of the file. This leaves history preservation as an exercise to be undertaken on the backup target. It may not necessarily be easy and may not be efficient.
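To make the ZFS and tar ideas above a little more concrete, here is a minimal sketch; the pool, dataset and directory names are placeholders, and /mnt/runner stands in for the sneakernet device:
# On the machine being backed up: snapshot, then serialise the delta since the
# previous snapshot into a file queued on the runner device. -w sends the raw
# (still encrypted) records of an encrypted dataset.
zfs snapshot tank/data@2020-12-23
zfs send -w -i tank/data@2020-12-16 tank/data@2020-12-23 > /mnt/runner/data-incr.zfs

# On the airgapped target: replay the queued stream into the backup pool.
zfs receive backup/data < /mnt/runner/data-incr.zfs

# The tar equivalent for non-ZFS users: --listed-incremental keeps the local
# state file that the next run is compared against.
tar --create --listed-incremental=/var/lib/backup/data.snar \
    --file=/mnt/runner/data-incr.tar /home/data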
rsync batches
rsync can be used to compute a delta between two directory trees and express it as a single-file batch that can be processed by a remote rsync. Unfortunately this implies the sender must always keep an old tree around (barring a solution such as ZFS snapshots) in order to compute the delta, and of course it still implies the need for history processing on the remote.

Getting the Data There
OK, so you've got an airgapped system and some sort of runner device for your sneakernet (USB stick, hard drive, etc). Now what? Obviously you could just copy data onto the runner and move it back off at the backup target. But a tool like NNCP (sort of a modernized UUCP) offers a lot of help in automating the process, returning error reports, etc. NNCP can be used online over TCP, over reliable serial links, over ssh, with offline onion routing via intermediaries or directly, etc. Imagine having an airgapped machine at a different location you go to frequently (workplace, friend, etc). Before leaving, you put a USB stick in your pocket. When you get there, you pop it in. It's despooled and processed while you wait, and return emails or whatever are queued up to be sent when you get back home. Not bad, eh?

Future installment
I'm going to try some of these approaches and report back on my experiences in the next few weeks.

28 November 2020

Mark Brown: Book club: Rust after the honeymoon

Earlier this month Daniel, Lars and I got together to discuss Bryan Cantrill's article Rust after the honeymoon. This is an overview of what keeps him enjoying working with Rust after having used it for an extended period of time for low-level systems work at Oxide; we were particularly interested to read a perspective from someone who was both very experienced in general and had been working with the language for a while. While I have no experience with Rust, both Lars and Daniel have been using it for a while and greatly enjoy it. One of the first areas we discussed was data-bearing enums, which have been very important to Bryan. In keeping with a pattern we all noted, these take a construct that's relatively commonly implemented by hand in C (or skipped as too much effort, as Lars found) and provide direct support in the language for it. For both Daniel and Lars this has been key to their enjoyment of Rust: it makes things that are good practice or common idioms in C and C++ into first-class language features, which makes them more robust and allows them to fade into the background in a way they can't when done by hand. Daniel was also surprised by some omissions, some small, such as the ? operator, but others much more substantial, the standout one being editions. These aim to address the problems seen with version transitions in other languages like Python, allowing individual parts of a Rust program to adopt potentially incompatible language features while remaining interoperable with older editions of the language, rather than requiring the entire program to be upgraded en masse. This helps Rust move forwards with less need to maintain strict source-level compatibility, allowing much more rapid evolution and helping deal with any issues that are found. Lars expressed the results of this very clearly, saying that while lots of languages offer a 20%/80% solution which does very well in specific problem domains but has issues for some applications, Rust is much more able to move towards general applicability by addressing problems and omissions as they are understood. This distracted us a bit from the actual content of the article and we had an interesting discussion of the issues with handling OS differences in filenames portably. Rather than mapping filenames onto a standard type within the language and then having to map back out into whatever representation the system actually uses, Rust has an explicit type for filenames which must be explicitly converted on those occasions when it's required, meaning that a lot of file handling never needs to worry about anything except the OS-native format and doesn't run into surprises. This is in keeping with Rust's general approach to interfacing with things that can't be represented in its abstractions: rather than hiding things, it keeps track of where things that might break its assumptions are and requires the programmer to acknowledge and handle them explicitly. Both Lars and Daniel said that this made them feel a lot more confident in the code that they were writing and that they had a good handle on where complexity might lie; Lars noted that Rust is the first language he's felt comfortable writing multi-threaded code in. We all agreed that the effect here was more about having idioms which tend to be robust and which both encourage writing things well and give readers tools to help know where particular attention is required; no tooling can avoid problems entirely. 
This was definitely an interesting discussion for me with my limited familiarity with Rust; hopefully Daniel and Lars also got a lot out of it!

23 November 2020

Shirish Agarwal: White Hat Senior and Education

I had been thinking of doing a blog post on RCEP, which China signed with 14 countries a week and a day back, but this new story has broken and is going somewhat viral on the interwebs, especially Twitter, and is pretty much in our domain, so I thought it would be better to do a blog post about it. Also, there is quite a lot packed in, so there is quite a bit of unpacking to do.

Whitehat, Greyhat and Blackhat For those of you who may not, there are actually three terms especially in computer science that one comes across. Those are white hats, grey hats and black hats. Now clinically white hats are like the fiery angels or the good guys who basically take permissions to try and find out weakness in an application, program, website, organization and so on and so forth. A somewhat dated reference to hacker could be Sandra Bullock (The Net 1995) , Sneakers (1992), Live Free or Die Hard (2007) . Of the three one could argue that Sandra was actually into viruses which are part of computer security but still she showed some bad-ass skills, but then that is what actors are paid to do  Sneakers was much more interesting for me because in that you got the best key which can unlock any lock, something like quantum computing is supposed to do. One could equate both the first movies in either as a White hat or a Grey hat . A Grey hat is more flexible in his/her moral values, and they are plenty of such people. For e.g. Julius Assange could be described as a Grey hat, but as you can see and understand those are moral issues.

A black hat on the other hand is one who does things for profit even if it harms the others. The easiest fictitious examples are all Die Hard series, all of them except the 4th one, all had bad guys or black hats. The 4th also had but is the odd one out as it had Matthew Farell (Justin Long) as a Grey hat hacker. In real life Kevin Mitnick, Kevin Poulsen, Robert Tappan Morris, George Hotz, Gary McKinnon are some examples of hackers, most of whom were black hats, most of them reformed into white hats and security specialists. There are many other groups and names but that perhaps is best for another day altogether. Now why am I sharing this. Because in all of the above, the people who are using and working with the systems have better than average understanding of systems and they arguably would be better than most people at securing their networks, systems etc. but as we shall see in this case there has been lots of issues in the company.

WhiteHat Jr. and 300 Million Dollars Before I start this, I would like to share that for me this suit in many ways seems to be similar to the suit filed against Krishnaraj Rao . Although the difference is that Krishnaraj Rao s case/suit is that it was in real estate while this one is in education although many things are similar to those cases but also differ in some obvious ways. For e.g. in the suit against Krishnaraj Rao, the plaintiff s first approached the High Court and then the Supreme Court. Of course Krishnaraj Rao won in the High Court and then in the SC plaintiff s agreed to Krishnaraj Rao s demands as they knew they could not win in SC. In that case, a compromise was reached by the plaintiff just before judgement was to be delivered. In this case, the plaintiff have directly approached the Delhi High Court. The charges against Mr. Poonia (the defendant in this case) are very much similar to those which were made in Krishnaraj Rao s suit hence won t be going into those details. They have claimed defamation and filed a 20 crore suit. The idea is basically to silence any whistle-blowers.

Fictional Character Wolf Gupta The first issue in this case or perhaps one of the most famous or infamous character is an unknown. While he has been reportedly hired by Google India, BJYU, Chandigarh. This has been reported by Yahoo News. I did a cursory search on LinkedIn to see if there indeed is a wolf gupta but wasn t able to find any person with such a name. I am not even talking the amount of money/salary the fictitious gentleman is supposed to have got and the various variations on the salary figures at different times and the different ads.

If I wanted to, I could have asked few of the kind souls whom I know are working in Google to see if they can find such a person using their own credentials but it probably would have been a waste of time. When you show a LinkedIn profile in your social media, it should come up in the results, in this case it doesn t. I also tried to find out if somehow BJYU was a partner to Google and came up empty there as well. There is another story done by Kan India but as I m not a subscriber, I don t know what they have written but the beginning of the story itself does not bode well. While I can understand marketing, there is a line between marketing something and being misleading. At least to me, all of the references shared seems misleading at least to me.

Taking down dissent One of the big no-nos at least from what I perceive, you cannot and should not take down dissent or critique. Indians, like most people elsewhere around the world, critique and criticize day and night. Social media like twitter, mastodon and many others would not exist in the place if criticisms are not there. In fact, one could argue that Twitter and most social media is used to drive engagements to a person, brand etc. It is even an official policy in Twitter. Now you can t drive engagements without also being open to critique and this is true of all the web, including of WordPress and me  . What has been happening is that whitehatjr with help of bjyu have been taking out content of people citing copyright violation which seems laughable. When citizens critique anything, we are obviously going to take the name of the product otherwise people would have to start using new names similar to how Tom Riddle was known as Dark Lord , Voldemort and He who shall not be named . There have been quite a few takedowns, I just provide one for reference, the rest of the takedowns would probably come in the ongoing suit/case.
Whitehat Jr. ad showing investors fighting

Now a brief synopsis of what the ad is about. The ad is about a kid named Chintu who makes an app. The app is so good that investors come to his house, right onto the lawn, and start fighting each other. The parents enjoy watching the fight, and to add to the whole thing there is also a nosy neighbor who has his own observations. Simply speaking, it is a juvenile ad, but it works because most parents in India, as elsewhere, are insecure.
Jihan critiquing the whitehatjr ad
Before starting, let me assure that I asked Jihan s parents if it s ok to share his ad on my blog and they agreed. What he has done is broken down the ad and showed how juvenile the ad is and using logic and humor as a template for the same. He does make sure to state that he does not know how the product is as he hasn t used it. His critique was about the ad and not the product as he hasn t used that.

The Website If you look at the website, sadly, most of the site only talks about itself rather than giving examples that people can look in detail. For e.g. they say they have few apps. on Google play-store but no link to confirm the same. The same is true of quite a few other things. In another ad a Paralympic star says don t get into sports and get into coding. Which athlete in their right mind would say that? And it isn t that we (India) are brimming with athletes at the international level. In the last outing which was had in 2016, India sent a stunning 117 athletes but that was an exception as we had the women s hockey squad which was of 16 women, and even then they were overshadowed in numbers by the bureaucratic and support staff. There was criticism about the staff bit but that is probably a story for another date. Most of the site doesn t really give much value and the point seems to be driving sales to their courses. This is pressurizing small kids as well as teenagers and better who are in the second and third year science-engineering whose parents don t get that it is advertising and it is fake and think that their kids are incompetent. So this pressurizes both small kids as well as those who are learning, doing in whatever college or educational institution . The teenagers more often than not are unable to tell/share with them that this is advertising and fake. Also most of us have been on a a good diet of ads. Fair and lovely still sells even though we know it doesn t work. This does remind me of a similar fake academy which used very much similar symptoms and now nobody remembers them today. There used to be an academy called Wings Academy or some similar name. They used to advertise that you come to us and we will make you into a pilot or an airhostess and it was only much later that it was found out that most kids were doing laundry work in hotels and other such work. Many had taken loans, went bankrupt and even committed suicide because they were unable to pay off the loans due to the dreams given by the company and the harsh realities that awaited them. They were sued in court but dunno what happened but soon they were off the radar so we never came to know what happened to those million of kids whose life dreams were shattered.

Security Now comes the security part. They have alleged that Mr. Poonia broke into their systems. While this may be true, what I find funny is that with the name Whitehat, how can they justify it? If you are saying you are a white hat, you are supposed to be much better than this. And while I have not tried to penetrate their systems, I did find it laughable that the site is using an expired https:// certificate. I could have tried further to figure out the systems, but I chose not to. How they could not have an automated script to get the certificate renewed is beyond me; a certificate outage like this is very well understood in the industry. There are tools like Let's Encrypt and Certbot (both EFF) and many others. But that is their concern, not mine.
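For reference, automating renewal is only a couple of commands these days; a minimal sketch with certbot (the domain and webroot path are placeholders):
# One-time issuance using the webroot method (nginx/apache plugins also exist).
sudo certbot certonly --webroot -w /var/www/html -d example.com

# Renewal can then be left to a daily cron entry; many distro packages already
# ship an equivalent systemd timer that does this automatically.
0 3 * * * root certbot renew --quiet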

Comparison A similar offering would be unacademy but as can be seen they neither try to push you in any way and nor do they make any ridiculous claims. In fact how genuine unacademy is can be gauged from the fact that many of its learning resources are available to people to see on YT and if they have tools they can also download it. Now, does this mean that every educational website should have their content for free, of course not. But when a channel has 80% 90% of it YT content as ads and testimonials then they surely should give a reason to pause both for parents and students alike. But if parents had done that much research, then things would not be where they are now.

Allegations Just to complete, there are allegations by Mr. Poonia with some screenshots which show the company has been doing a lot of bad things. For e.g. they were harassing an employee at night 2 a.m. who was frustrated and working in the company at the time. Many of the company staff routinely made sexist and offensive, sexual abusive remarks privately between themselves for prospective women who came to interview via webcam (due to the pandemic). There also seems to be a bit of porn on the web/mobile server of the company as well. There also have been allegations that while the company says refund is done next day, many parents who have demanded those refunds have not got it. Now while Mr. Poonia has shared some quotations of the staff while hiding the identities of both the victims and the perpetrators, the language being used in itself tells a lot. I am in two minds whether to share those photos or not hence atm choosing not to. Poonia has also contended that all teachers do not know programming, and they are given scripts to share. There have been some people who did share that experience with him
Suruchi Sethi
From the company s side they are alleging he has hacked the company servers and would probably be using the Fruit of the poisonous tree argument which we have seen have been used in many arguments.

Conclusion Now that lies in the eyes of the Court whether the single bench chooses the literal meaning or use the spirit of the law or the genuine concerns of the people concerned. While in today s hearing while the company asked for a complete sweeping injunction they were unable to get it. Whatever may happen, we may hope to see some fireworks in the second hearing which is slated to be on 6.01.2021 where all of this plays out. Till later.

21 October 2020

Bits from Debian: Debian donation for Peertube development

The Debian project is happy to announce a donation of 10,000 to help Framasoft reach the fourth stretch-goal of its Peertube v3 crowdfunding campaign -- Live Streaming. This year's iteration of the Debian annual conference, DebConf20, had to be held online, and while being a resounding success, it made clear to the project our need to have a permanent live streaming infrastructure for small events held by local Debian groups. As such, Peertube, a FLOSS video hosting platform, seems to be the perfect solution for us. We hope this unconventional gesture from the Debian project will help us make this year somewhat less terrible and give us, and thus humanity, better Free Software tooling to approach the future. Debian thanks the commitment of numerous Debian donors and DebConf sponsors, particularly all those that contributed to DebConf20 online's success (volunteers, speakers and sponsors). Our project also thanks Framasoft and the PeerTube community for developing PeerTube as a free and decentralized video platform. The Framasoft association warmly thanks the Debian Project for its contribution, from its own funds, towards making PeerTube happen. This contribution has a twofold impact. Firstly, it's a strong sign of recognition from an international project - one of the pillars of the Free Software world - towards a small French association which offers tools to liberate users from the clutches of the web's giant monopolies. Secondly, it's a substantial amount of help in these difficult times, supporting the development of a tool which equally belongs to and is useful to everyone. The strength of Debian's gesture proves, once again, that solidarity, mutual aid and collaboration are values which allow our communities to create tools to help us strive towards Utopia.

Reproducible Builds: Supporter spotlight: Civil Infrastructure Platform

The Reproducible Builds project depends on our many projects, supporters and sponsors. We rely on their financial support, but they are also valued ambassadors who spread the word about the Reproducible Builds project and the work that we do. This is the first installment in a series featuring the projects, companies and individuals who support the Reproducible Builds project. If you are a supporter of the Reproducible Builds project (of whatever size) and would like to be featured here, please let get in touch with us at contact@reproducible-builds.org. However, we are kicking off this series by featuring Urs Gleim and Yoshi Kobayashi of the Civil Infrastructure Platform (CIP) project.
Chris Lamb: Hi Urs and Yoshi, great to meet you. How might you relate the importance of the Civil Infrastructure Platform to a user who is non-technical? A: The Civil Infrastructure Platform (CIP) project is focused on establishing an open source base layer of industrial-grade software that acts as building blocks in civil infrastructure projects. End-users of this critical code include systems for electric power generation and energy distribution, oil and gas, water and wastewater, healthcare, communications, transportation, and community management. These systems deliver essential services, provide shelter, and support social interactions and economic development. They are society s lifelines, and CIP aims to contribute to and support these important pillars of modern society.
Chris: We have entered an age where our civilisations have become reliant on technology to keep us alive. Does the CIP believe that the software that underlies our own safety (and the safety of our loved ones) receives enough scrutiny today? A: For companies developing systems running our infrastructure and keeping our factories working, it is part of their business to ensure the availability, uptime, and security of these very systems. However, software complexity continues to increase, and the efforts spent on those systems is now exploding. What is missing is a common way of achieving this through refining the same tools, and cooperating on the hardening and maintenance of standard components such as the Linux operating system.
Chris: How does the Reproducible Builds effort help the Civil Infrastructure Platform achieve its goals? A: Reproducibility helps a great deal in software maintenance. We have a number of use-cases that should have long-term support of more than 10 years. During this period, we encounter issues that need to be fixed in the original source code. But before we make changes to the source code, we need to check whether it is actually the original source code or not. If we can reproduce exactly the same binary from the source code even after 10 years, we can start to invest time and energy into making these fixes.
Chris: Can you give us a brief history of the Civil Infrastructure Platform? Are there any specific success stories that the CIP is particularly proud of? A: The CIP Project formed in 2016 as a project hosted by Linux Foundation. It was launched out of necessity to establish an open source framework and the subsequent software foundation delivers services for civil infrastructure and economic development on a global scale. Some key milestones we have achieved as a project include our collaboration with Debian, where we are helping with the Debian Long Term Support (LTS) initiative, which aims to extend the lifetime of all Debian stable releases to at least 5 years. This is critical because most control systems for transportation, power plants, healthcare and telecommunications run on Debian-based embedded systems. In addition, CIP is focused on IEC 62443, a standards-based approach to counter security vulnerabilities in industrial automation and control systems. Our belief is that this work will help mitigate the risk of cyber attacks, but in order to deal with evolving attacks of this kind, all of the layers that make up these complex systems (such as system services and component functions, in addition to the countless operational layers) must be kept secure. For this reason, the IEC 62443 series is attracting attention as the de facto cyber-security standard.
Chris: The Civil Infrastructure Platform project comprises a number of project members from different industries, with stakeholders across multiple countries and continents. How does working together with a broad group of interests help in your effectiveness and efficiency? A: Although the members have different products, they share the requirements and issues when developing sustainable products. In the end, we are driven by common goals. For the project members, working internationally is simply daily business. We see this as an advantage over regional or efforts that focus on narrower domains or markets.
Chris: The Civil Infrastructure Platform supports a number of other existing projects and initiatives in the open source world too. How much do you feel being a part of the broader free software community helps you achieve your aims? A: Collaboration with other projects is an essential part of how CIP operates we want to enable commonly-used software components. It would not make sense to re-invent solutions that are already established and widely used in product development. To this end, we have an upstream first policy which means that, if existing projects need to be modified to our needs or are already working on issues that we also need, we work directly with them.
Chris: Open source software in desktop or user-facing contexts receives a significant amount of publicity in the media. However, how do you see the future of free software from an industrial-oriented context? A: Open source software has already become an essential part of the industry and civil infrastructure, and the importance of open source software there is still increasing. Without open source software, we cannot achieve, run and maintain future complex systems, such as smart cities and other key pieces of civil infrastructure.
Chris: If someone wanted to know more about the Civil Infrastructure Platform (or even to get involved) where should they go to look? A: We have many avenues to participate and learn more! We have a website, a wiki and you can even follow us on Twitter.

For more about the Reproducible Builds project, please see our website at reproducible-builds.org. If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org.

12 October 2020

Russ Allbery: Review: Hand to Mouth

Review: Hand to Mouth, by Linda Tirado
Publisher: G.P. Putnam's Sons
Copyright: October 2014
ISBN: 0-698-17528-X
Format: Kindle
Pages: 194
The first time Linda Tirado came to the viral attention of the Internet was in 2013 when she responded to a forum question: "Why do poor people do things that seem so self-destructive?" Here are some excerpts from her virally popular five-page response, which is included in the first chapter:
I know how to cook. I had to take Home Ec. to graduate high school. Most people on my level didn't. Broccoli is intimidating. You have to have a working stove, and pots, and spices, and you'll have to do the dishes no matter how tired you are or they'll attract bugs. It is a huge new skill for a lot of people. That's not great, but it's true. And if you fuck it up, you could make your family sick. We have learned not to try too hard to be middle class. It never works out well and always makes you feel worse for having tried and failed yet again. Better not to try. It makes more sense to get food that you know will be palatable and cheap and that keeps well. Junk food is a pleasure that we are allowed to have; why would we give that up? We have very few of them.
and
I smoke. It's expensive. It's also the best option. You see, I am always, always exhausted. It's a stimulant. When I am too tired to walk one more step, I can smoke and go for another hour. When I am enraged and beaten down and incapable of accomplishing one more thing, I can smoke and I feel a little better, just for a minute. It is the only relaxation I am allowed. It is not a good decision, but it is the only one that I have access to. It is the only thing I have found that keeps me from collapsing or exploding.
This book is an expansion on that essay. It's an entry in a growing genre of examinations of what it means to be poor in the United States in the 21st century. Unlike most of those examinations, it isn't written by an outsider performing essentially anthropological field work. It's one of the rare books written by someone who is herself poor and had the combination of skill and viral fame required to get an opportunity to talk about it in her own words.
I haven't had it worse than anyone else, and actually, that's kind of the point. This is just what life is for roughly a third of the country. We all handle it in our own ways, but we all work in the same jobs, live in the same places, feel the same sense of never quite catching up. We're not any happier about the exploding welfare rolls than anyone else is, believe me. It's not like everyone grows up and dreams of working two essentially meaningless part-time jobs while collecting food stamps. It's just that there aren't many other options for a lot of people.
I didn't find this book back in 2014 when it was published. I found it in 2020 during Tirado's second round of Internet fame: when the police shot out her eye with "non-lethal" rounds while she was covering the George Floyd protests as a photojournalist. In characteristic fashion, she subsequently reached out to the other people who had been blinded by the police, used her temporary fame to organize crowdfunded support for others, and is planning on having "try again" tattooed over the scar. That will give you a feel for the style of this book. Tirado is blunt, opinionated, honest, and full speed ahead. It feels weird to call this book delightful since it's fundamentally about the degree to which the United States is failing a huge group of its citizens and making their lives miserable, but there is something so refreshing and clear-headed about Tirado's willingness to tell you the straight truth about her life. It's empathy delivered with the subtlety of a brick, but also with about as much self-pity as a brick. Tirado is not interested in making you feel sorry for her; she's interested in you paying attention.
I don't get much of my own time, and I am vicious about protecting it. For the most part, I am paid to pretend that I am inhuman, paid to cater to both the reasonable and unreasonable demands of the general public. So when I'm off work, feel free to go fuck yourself. The times that I am off work, awake, and not taking care of life's details are few and far between. It's the only time I have any autonomy. I do not choose to waste that precious time worrying about how you feel. Worrying about you is something they pay me for; I don't work for free.
If you've read other books on this topic (Emily Guendelsberger's On the Clock is still the best of those I've read), you probably won't get many new facts from Hand to Mouth. I think this book is less important for the policy specifics than it is for who is writing it (someone who is living that life and can be honest about it) and the depth of emotional specifics that Tirado brings to the description. If you have never been poor, you will learn the details of what life is like, but more significantly you'll get a feel for how Tirado feels about it, and while this is one individual perspective (as Tirado stresses, including the fact that, as a white person, there are other aspects of poverty she's not experienced), I think that perspective is incredibly valuable. That said, Hand to Mouth provides even more reinforcement of the importance of universal medical care, the absurdity of not including dental care in even some of the more progressive policy proposals, and the difficulties in the way of universal medical care even if we solve the basic coverage problem. Tirado has significant dental problems due to unrepaired damage from a car accident, and her account reinforces my belief that we woefully underestimate how important good dental care is to quality of life. But providing universal insurance or access is only the start of the problem.
There is a price point for good health in America, and I have rarely been able to meet it. I choose not to pursue treatment if it will cost me more than it will gain me, and my cost-benefit is done in more than dollars. I have to think of whether I can afford any potential treatment emotionally, financially, and timewise. I have to sort out whether I can afford to change my life enough to make any treatment worth it. I've been told by more than one therapist that I'd be fine if I simply reduced the amount of stress in my life. It's true, albeit unhelpful. Doctors are fans of telling you to sleep and eat properly, as though that were a thing one can simply do.
That excerpt also illustrates one of the best qualities of this book. So much writing about "the poor" treats them as an abstract problem that the implicitly not-poor audience needs to solve, and this leads rather directly to the endless moralizing as "we" attempt to solve that problem by telling poor people what they need to do. Tirado is unremitting in fighting for her own agency. She has a shitty set of options, but within those options she makes her own decisions. She wants better options and more space in which to choose them, which I think is a much more productive way to frame the moral argument than the endless hand-wringing over how to help "those poor people." This is so much of why I support universal basic income. Just give people money. It's not all of the solution (UBI doesn't solve the problem of universal medical care, and we desperately need to find a way to make work less awful), but it's the most effective thing we can do immediately. Poor people are, if anything, much better at making consequential financial decisions than rich people because they have so much more practice. Bad decisions are less often due to bad decision-making than bad options and the balancing of objectives that those of us who are not poor don't understand. Hand to Mouth is short, clear, refreshing, bracing, and, as you might have noticed, very quotable. I think there are other books in this genre that offer more breadth or policy insight, but none that have the same feel of someone cutting through the bullshit of lazy beliefs and laying down some truth. If any of the above excerpts sound like the sort of book you would enjoy reading, pick this one up. Rating: 8 out of 10

11 October 2020

Mark Brown: Book club: JSON Web Tokens

This month for our book club Daniel, Lars, Vince and I read Hardcoded secrets, unverified tokens, and other common JWT mistakes, which wasn't quite what we'd thought when it was picked. We had been expecting an analysis of JSON web tokens themselves, as several of us had been working in the area and had noticed various talk about problems with the standard, but instead the article is more a discussion of the use of semgrep to find and fix common issues, using issues with JWT as examples. We therefore started off with a bit of a discussion of JWT, concluding that the underlying specification was basically fine given the problem to be solved but that, as with any security related technology, there were plenty of potential pitfalls in implementation and that sadly many of the libraries implementing the specification make it far too easy to make mistakes such as those covered by the article through their interface design and defaults. For example, interfaces that allow interchangeable use of public keys and shared keys are error prone, as is making it easy to access unauthenticated data from tokens without clearly flagging that it is unauthenticated. We agreed that the wide range of JWT implementations available and successfully interoperating with each other is a sign that JWT is getting something right in providing a specification that is clear and implementable. Moving on to semgrep, we were all very enthusiastic about the technology: language independent semantic matching with a good set of rules for a range of languages available. Those of us who work on the Linux kernel were familiar with semantic matching and patching as implemented by Coccinelle, which has been used quite successfully for years both to avoid bad patterns in code and to make tree-wide changes; as demonstrated by the article, it is a powerful technique. We were impressed by the multi-language support and approachability of semgrep, with tools like their web editor seeming particularly helpful for people getting started with the tool, especially in conjunction with the wide range of examples available. This was a good discussion (including the tangential discussions of quality problems we had all faced dealing with software over the years, depressing though those can be) and semgrep was a great tool to learn about; I know I'm going to be using it for some of my projects.
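For anyone who wants to give semgrep a try, getting started looks roughly like this; note that the p/jwt registry ruleset name is an assumption from memory and may have changed, so check the semgrep registry for the current JWT-related rules:
pip install semgrep
# Run a registry ruleset (name assumed) against a project:
semgrep --config p/jwt path/to/project
# A locally written rule file can be run the same way:
semgrep --config my-jwt-rules.yaml path/to/project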

21 September 2020

Russ Allbery: Review: Lower Ed

Review: Lower Ed, by Tressie McMillan Cottom
Publisher: The New Press
Copyright: 2017
Printing: 2018
ISBN: 1-62097-472-X
Format: Kindle
Pages: 217
Lower Ed (subtitled The Troubling Rise of For-Profit Colleges in the New Economy) is the first book by sociologist Tressie McMillan Cottom. (I previously reviewed her second book, the excellent essay collection Thick.) It is a deep look at the sociology of for-profit higher education in the United States based on interviews with students and executives, analysis of Wall Street filings, tests of the admissions process, and her own personal experiences working for two of the schools. One of the questions that McMillan Cottom tries to answer is why students choose to enroll in these institutions, particularly the newer type of institution funded by federal student loans and notorious for being more expensive and less valuable than non-profit colleges and universities. I was hesitant to read this book because I find for-profit schools depressing. I grew up with the ubiquitous commercials, watched the backlash develop, and have a strongly negative impression of the industry, partly influenced by having worked in traditional non-profit higher education for two decades. The prevailing opinion in my social group is that they're a con job. I was half-expecting a reinforcement of that opinion by example, and I don't like reading infuriating stories about people being defrauded. I need not have worried. This is not that sort of book (nor, in retrospect, do I think McMillan Cottom would approach a topic from that angle). Sociology is broader than reporting. Lower Ed positions for-profit colleges within a larger social structure of education, credentialing, and changes in workplace expectations; takes a deep look at why they are attractive to their students; and humanizes and complicates the motives and incentives of everyone involved, including administrators and employees of for-profit colleges as well as the students. McMillan Cottom does of course talk about the profit motive and the deceptions surrounding that, but the context is less that of fraud that people are unable to see through and more a balancing of the drawbacks of a set of poor choices embedded in institutional failures. One of my metrics for a good non-fiction book is whether it introduces me to a new idea that changes how I analyze the world. Lower Ed does that twice. The first idea is the view of higher education through the lens of risk shifting. It used to be common for employers to hire people without prior job-specific training and do the training in-house, possibly through an apprenticeship structure. More notably, once one was employed by a particular company, the company routinely arranged or provided ongoing training. This went hand-in-hand with a workplace culture of long tenure, internal promotion, attempts to avoid layoffs, and some degree of mutual loyalty. Companies expected to invest significantly in an employee over their career and thus also had an incentive to retain that employee rather than train someone for a competitor. However, from a purely financial perspective, this is a risk and an inefficiency, similar to the risk of carrying a large inventory of parts and components. Companies have responded to investor-driven focus on profits and efficiency by reducing overhead and shifting risk. This leads to the lean supply chain, where no one pays for parts to sit around in warehouses and companies aren't caught with large stockpiles of now-useless components, but which is more sensitive to any disruption (such as from a global pandemic). 
And, for employment, it leads to a desire to hire pre-trained workers, retain only enough workers to do the current amount of work, and replace them with new workers who already have appropriate training rather than retrain them. The effect of the corporate decision to only hire pre-trained employees is to shift the risk and expense of training from the company to the prospective employee. The individual has to seek out training at their own expense in the hope (not guarantee) that at the conclusion of that training they will get or retain a job. People therefore turn to higher education to both provide that training and to help them decide what type of training will eventually be valuable. This has a long history with certain professional fields (doctors and lawyers, for example), but the requirements for completing training in those fields are relatively clear (a professional license to practice) and the compensation reflects the risk. What's new is the shift of training risk to the individual in more mundane jobs, without any corresponding increase in compensation. This, McMillan Cottom explains, is the background for the growth in demand for higher education in general and the type of education offered by for-profit colleges in particular. Workers who in previous eras would be trained by their employers are now responsible for their own training. That training is no longer judged by the standards of a specific workplace, but is instead evaluated by a hiring process that expects constant job-shifting. This leads to increased demand by both workers and employers for credentials: some simple-to-check certificate of completion of training that says that this person has the skills to immediately start doing some job. It also leads to a demand for more flexible class hours, since the student is now often someone older with a job and a family to balance. Their ongoing training used to be considered a cost of business and happen during their work hours; now it is something they have to fit around the contours of their life because their employer has shifted that risk to them. The risk-shifting frame makes sense of the "investment" language so common in for-profit education. In this job economy, education as investment is not a weird metaphor for the classic benefits of a liberal arts education: broadened perspective, deeper grounding in philosophy and ethics, or heightened aesthetic appreciation. It's an investment in the literal financial sense; it is money that you spend now in order to get a financial benefit (a job) in the future. People have to invest in their own training because employers are no longer doing so, but still require the outcome of that investment. And, worse, it's primarily a station-keeping investment. Rather than an optional expenditure that could reap greater benefits later, it's a mandatory expenditure to prevent, at best, stagnation in a job paying poverty wages, and at worst the disaster of unemployment. This explains renewed demand for higher education, but why for-profit colleges? We know they cost more and have a worse reputation (and therefore their credentials have less value) than traditional non-profit colleges. Flexible hours and class scheduling explain some of this but not all of it. That leads to the second perspective-shifting idea I got from Lower Ed: for-profit colleges are very good at what they focus time and resources on, and they focus on enrolling students. It is hard to enroll in a university!
More precisely, enrolling in a university requires bureaucracy navigation skills, and those skills are class-coded. The people who need them the most are the least likely to have them. Universities do not reach out to you, nor do they guide you through the process. You have to go to them and discover how to apply, something that is often made harder by the confusing state of many university web sites. The language and process are opaque unless other people in your family have experience with universities and can explain it. There might be someone you can reach on the phone to ask questions, but they're highly unlikely to proactively guide you through the remaining steps. It's your responsibility to understand deadlines, timing, and sequence of operations, and if you miss any of the steps (due to, for example, the overscheduled life of someone in need of better education for better job prospects), the penalty in time and sometimes money can be substantial. And admission is just the start; navigating financial aid, which most students will need, is an order of magnitude more daunting. Community colleges are somewhat easier (and certainly cheaper) than universities, but still have similar obstacles (and often even worse web sites). It's easy for people like me, who have long professional expertise with bureaucracies, family experience with higher education, and a support network of people to nag me about deadlines, to underestimate this. But the application experience at a for-profit college is entirely different in ways far more profound than I had realized. McMillan Cottom documents this in detail from her own experience working for two different for-profit colleges and from an experiment where she indicated interest in multiple for-profit colleges and then stopped responding before signing admission paperwork. A for-profit college is fully invested in helping a student both apply and get financial aid, devotes someone to helping them through that process, does not expect them to understand how to navigate bureaucracies or decipher forms on their own, does not punish unexpected delays or missed appointments, and goes to considerable lengths to try to keep anyone from falling out of the process before they are enrolled. They do not expect their students to already have the skills that one learns from working in white-collar jobs or from being surrounded by people who do. They provide the kind of support that an educational institution should provide to people who, by definition, don't understand something and need to learn. Reading about this was infuriating. Obviously, this effort to help people enroll is largely for predatory reasons. For-profit schools make their money off federal loans and they don't get that money unless they can get someone to enroll and fill out financial paperwork (and to some extent keep them enrolled), so admissions is their cash cow and they act accordingly. But that's not why I found it infuriating; that's just predictable capitalism. What I think is inexcusable is that nothing they do is that difficult. We could be doing the same thing for prospective community college students but have made the societal choice not to. We believe that education is valuable, we constantly advocate that people get more job training and higher education, and yet we demand prospective students navigate an unnecessarily baroque and confusing application process with very little help, and then stereotype and blame them for failing to do so.
This admission support is not a question of resources. For-profit colleges are funded almost entirely by federally-guaranteed student loans. We are paying them to help people apply. It is, in McMillan Cottom's term, a negative social insurance program. Rather than buffering people against the negative effects of risk-shifting of employers by helping them into the least-expensive and most-effective training programs (non-profit community colleges and universities), we are spending tax dollars to enrich the shareholders of for-profit colleges while underfunding the alternatives. We are choosing to create a gap that routes government support to the institution that provides worse training at higher cost but is very good at helping people apply. It's as if the unemployment system required one to use payday lenders to get one's unemployment check. There is more in this book I want to talk about, but this review is already long enough. Suffice it to say that McMillan Cottom's analysis does not stop with market forces and the admission process, and the parts of her analysis that touch on my own personal experience as someone with a somewhat unusual college path ring very true. Speaking as a former community college student, the discussion of class credit transfer policies and the way that institutional prestige gatekeeping and the desire to push back against low-quality instruction becomes a trap that keeps students in the for-profit system deserves another review this length. So do the implications of risk-shifting and credentialism on the morality of "cheating" on schoolwork. As one would expect from the author of the essay "Thick" about bringing context to sociology, Lower Ed is personal and grounded. McMillan Cottom doesn't shy away from including her own experiences and being explicit about her sources and research. This is backed up by one of the best methodological notes sections I've seen in a book. One of the things I love about McMillan Cottom's writing is that it's solidly academic, not in the sense of being opaque or full of jargon (the text can be a bit dense, but I rarely found it hard to follow), but in the sense of being clear about the sources of knowledge and her methods of extrapolation and analysis. She brings her receipts in a refreshingly concrete way. I do have a few caveats. First, I had trouble following a structure and line of reasoning through the whole book. Each individual point is meticulously argued and supported, but they are not always organized into a clear progression or framework. That made Lower Ed feel at times like a collection of high-quality but somewhat unrelated observations about credentials, higher education, for-profit colleges, their student populations, their business models, and their relationships with non-profit schools. Second, there are some related topics that McMillan Cottom touches on but doesn't expand sufficiently for me to be certain I understood them. One of the big ones is credentialism. This is apparently a hot topic in sociology and is obviously important to this book, but it's referenced somewhat glancingly and was not satisfyingly defined (at least for me). There are a few similar places where I almost but didn't quite follow a line of reasoning because the book structure didn't lay enough foundation. Caveats aside, though, this was meaty, thought-provoking, and eye-opening, and I'm very glad that I read it. 
This is a topic that I care more about than most people, but if you have watched for-profit colleges with distaste but without deep understanding, I highly recommend Lower Ed. Rating: 8 out of 10

10 September 2020

Daniel Silverstone: Broccoli Sync Conversation

A number of days ago (I know, I'm an awful human who failed to post this for over a week), myself, Lars, Mark, and Vince discussed Dropbox's article about Broccoli Sync. It wasn't quite what we'd expected but it was an interesting discussion of compression and streamed data. Vince observed that it was interesting in that it was a way to move storage compression cost to the client edge. This makes sense because decompression (to verify the uploaded content) is cheaper than compression; and also since CPU and bandwidth are expensive, spending the client CPU to reduce bandwidth is worthwhile. Lars talked about how even in situations where everyone has gigabit data connectivity with no limit on total transit, bandwidth/time is a concern, so it makes sense. We liked how they determined the right compression level to use available bandwidth (i.e. not be CPU throttled) but also gain the most compression possible. Their diagram showing relative compression sizes for level 1 vs. 3 vs. 5 suggests that the gain justifies putting the effort in for 5 rather than 1. It's interesting in that diagram that 'documents' don't compress well but then again it is notable that such documents are likely DEFLATE'd zip files. Basically if the data is already compressed then there's little hope Brotli will gain much. I raised that it was interesting that they chose Brotli, in part, due to the availability of a pure Rust implementation of Brotli. Lars mentioned that Microsoft and others talk about how huge quantities of C code have unexpected memory safety issues and so perhaps that is related. Daniel mentioned that the document talked about Dropbox having a policy of not running unconstrained C code which was interesting. Vince noted that in their deployment challenges it seemed like a very poor general strategy to cope with crasher errors; but Daniel pointed out that it might be an over-simplified description, and Mark suggested that it might be sufficient until a fix can be pushed out. Vince agreed that it's plausible this is a tiered/sharded deployment process and thus a good way to smoke out problems. Daniel found it interesting that their block storage sounds remarkably like every other content-addressable storage and that while they make it clear in the article that encryption, client identification etc are elided, it looks like they might be able to deduplicate between theoretically hostile clients. We think that the compressed-data plus type plus hash (which we assume also contains length) is an interesting and nice approach to durability and integrity validation in the protocol. And the compressed blocks can then be passed to the storage backend quickly and effectively which is nice for latency. Daniel raised that he thought it was fun that their rust-brotli library is still workable on Rust 1.12 which is really quite old. We ended up on a number of tangential discussions, about Rust, about deployment strategies, and so on. While the article itself was a little thin, we certainly had a lot of good chatting around topics it raised. We'll meet again in a month (on the 28th Sept) so perhaps we'll have a chunkier article next time. (Possibly this and/or related articles)
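The relative sizes for different quality levels are easy to reproduce locally with the brotli command line tool; this is just a quick sketch for the curious (sample-file is a placeholder), not something from the article itself:
# Compare compressed sizes at quality 1, 5 and 11 for a sample file;
# -q selects the quality level and -c writes the result to stdout.
for q in 1 5 11; do
  printf 'q=%s: ' "$q"
  brotli -q "$q" -c sample-file | wc -c
done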

31 August 2020

Russ Allbery: Review: Men at Arms

Review: Men at Arms, by Terry Pratchett
Series: Discworld #15
Publisher: Harper
Copyright: 1993
Printing: November 2013
ISBN: 0-06-223740-3
Format: Mass market
Pages: 420
Men at Arms is the fifteenth Discworld novel and a direct plot sequel to Guards! Guards!. You could start here without missing too much, but starting with Guards! Guards! would make more sense. And of course there are cameos (and one major appearance) by other characters who are established in previous books. Carrot, the adopted dwarf who joined the watch in Guards! Guards!, has been promoted to corporal. He is now in charge of training new recruits, a role that is more important because of the Night Watch's new Patrician-ordered diversity initiative. The Watch must reflect the ethnic makeup of the city. That means admitting a troll, a dwarf... and a woman? Trolls and dwarfs hate each other because dwarfs mine precious things out of rock and trolls are composed of precious things embedded in rocks, so relations between the new recruits are tense. Captain Vimes is leaving the Watch, and no one is sure who would or could replace him. (The reason for this is a minor spoiler for Guards! Guards!) A magical weapon is stolen from the Assassin's Guild. And a string of murders begins, murders that Vimes is forbidden by Lord Vetinari from investigating and therefore clearly is going to investigate. This is an odd moment at which to read this book. The Night Watch are not precisely a police force, although they are moving in that direction. Their role in Ankh-Morpork is made much stranger by the guild system, in which the Thieves' Guild is responsible for theft and for dealing with people who steal outside of the quota of the guild. But Men at Arms is in part a story about ethics, about what it means to be a police officer, and about what it looks like when someone is very good at that job. Since I live in the United States, that makes it hard to avoid reading Men at Arms in the context of the current upheavals about police racism, use of force, and lack of accountability. Men at Arms can indeed be read that way; community relations, diversity in the police force, the merits of making two groups who hate each other work together, and the allure of violence are all themes Pratchett is working with in this novel. But they're from the perspective of a UK author writing in 1993 about a tiny city guard without any of the machinery of modern police, so I kept seeing a point of clear similarity and then being slightly wrong-footed by the details. It also felt odd to read a book where the cops are the heroes, much in the style of a detective show. This is in no way a problem with the book, and in a way it was helpful perspective, but it was a strange reading experience.
Cuddy had only been a guard for a few days but already he had absorbed one important and basic fact: it is almost impossible for anyone to be in a street without breaking the law.
Vimes and Carrot are both excellent police officers, but in entirely different ways. Vimes treats being a cop as a working-class job and is inclined towards glumness and depression, but is doggedly persistent and unable to leave a problem alone. His ethics are covered by a thick layer of world-weary cynicism. Carrot is his polar opposite in personality: bright, endlessly cheerful, effortlessly charismatic, and determined to get along with everyone. On first appearance, this contrast makes Vimes seem wise and Carrot seem a bit dim. That is exactly what Pratchett is playing with and undermining in Men at Arms. Beneath Vimes's cynicism, he's nearly as idealistic as Carrot, even though he arrives at his ideals through grim contrariness. Carrot, meanwhile, is nowhere near as dim as he appears to be. He's certain about how he wants to interact with others and is willing to stick with that approach no matter how bad of an idea it may appear to be, but he's more self-aware than he appears. He and Vimes are identical in the strength of their internal self-definition. Vimes shows it through the persistent, grumpy stubbornness of a man devoted to doing an often-unpleasant job, whereas Carrot verbally steamrolls people by refusing to believe they won't do the right thing.
Colon thought Carrot was simple. Carrot often struck people as simple. And he was. Where people went wrong was thinking that simple meant the same thing as stupid.
There's a lot going on in this book apart from the profiles of two very different models of cop. Alongside the mystery (which doubles as pointed commentary on the corrupting influence of violence and personal weaponry), there's a lot about dwarf/troll relations, a deeper look at the Ankh-Morpork guilds (including a horribly creepy clown guild), another look at how good Lord Vetinari is at running the city by anticipating how other people will react, a sarcastic dog named Gaspode (originally seen in Moving Pictures), and Pratchett's usual collection of memorable lines. It is also the origin of the now-rightfully-famous Vimes boots theory:
The reason that the rich were so rich, Vimes reasoned, was because they managed to spend less money. Take boots, for example. He earned thirty-eight dollars a month plus allowances. A really good pair of leather boots cost fifty dollars. But an affordable pair of boots, which were sort of OK for a season or two and then leaked like hell when the cardboard gave out, cost about ten dollars. Those were the kind of boots Vimes always bought, and wore until the soles were so thin that he could tell where he was in Ankh-Morpork on a foggy night by the feel of the cobbles. But the thing was that good boots lasted for years and years. A man who could afford fifty dollars had a pair of boots that'd still be keeping his feet dry in ten years' time, while the poor man who could only afford cheap boots would have spent a hundred dollars on boots in the same time and would still have wet feet. This was the Captain Samuel Vimes 'Boots' theory of socioeconomic unfairness.
Men at Arms regularly makes lists of the best Discworld novels, and I can see why. At this point in the series, Pratchett has hit his stride. The plots have gotten deeper and more complex without losing the funny moments, movie and book references, and glorious turns of phrase. There is also a lot of life philosophy and deep characterization when one pays close attention to the characters.
He was one of those people who would recoil from an assault on strength, but attack weakness without mercy.
My one complaint is that I found it a bit overstuffed with both characters and subplots, and as a result had a hard time following the details of the plot. I found myself wanting a timeline of the murders or a better recap from one of the characters. As always with Pratchett, the digressions are wonderful, but they do occasionally come at the cost of plot clarity. I'm not sure I recommend the present moment in the United States as the best time to read this book, although perhaps there is no better time for Carrot and Vimes to remind us what good cops look like. But regardless of when one reads it, it's an excellent book, one of the best in the Discworld series to this point. Followed, in publication order, by Soul Music. The next Watch book is Feet of Clay. Rating: 8 out of 10

2 July 2020

Russell Coker: Desklab Portable USB-C Monitor

I just got a 15.6 inch 4K resolution Desklab portable touchscreen monitor [1]. It takes power via USB-C and video input via USB-C or mini HDMI, has touch screen input, and has speakers built in for USB or HDMI sound.

PC Use

I bought a mini-DisplayPort to HDMI adapter and for my first test ran it from my laptop, it was seen as a 1920*1080 DisplayPort monitor. The adaptor is specified as supporting 4K so I don't know why I didn't get 4K to work, my laptop has done 4K with other monitors. The next thing I plan to get is a VGA to HDMI converter so I can use this on servers, it can be a real pain getting a monitor and power cable to a rack mounted server and this portable monitor can be powered by one of the USB ports in the server. A quick search indicates that such devices start at about $12US. The Desklab monitor has no markings to indicate what resolution it supports, no part number, and no serial number. The only documentation I could find about how to recognise the difference between the FullHD and 4K versions is that the FullHD version supposedly draws 2A and the 4K version draws 4A. I connected my USB Ammeter and it reported that between 0.6 and 1.0A were drawn. If they meant to say 2W and 4W instead of 2A and 4A (I've seen worse errors in manuals) then the current drawn would indicate the 4K version. Otherwise the stated current requirements don't come close to matching what I've measured.

Power

The promise of USB-C was power from anywhere to anywhere. I think that such power delivery can theoretically be done with USB 3 and maybe USB 2, but asymmetric cables make it more challenging. I can power my Desklab monitor from a USB battery, from my Thinkpad's USB port (even when the Thinkpad isn't on mains power), and from my phone (although the phone battery runs down fast as expected). When I have a mains powered USB charger (for a laptop and rated at 60W) connected to one USB-C port and my phone on the other the phone can be charged while giving a video signal to the display. This is how it's supposed to work, but in my experience it's rare to have new technology live up to its potential at the start! One thing to note is that it doesn't have a battery. I had imagined that it would have a battery (in spite of there being nothing on their web site to imply this) because I just couldn't think of a touch screen device not having a battery. It would be nice if there was a version of this device with a big battery built in that could avoid needing separate cables for power and signal.

Phone Use

The first thing to note is that the Desklab monitor won't work with all phones, whether a phone will take the option of an external display depends on its configuration and some phones may support an external display but not touchscreen. The Huawei Mate devices are specifically listed in the printed documentation as being supported for touchscreen as well as display. Surprisingly the Desklab web site has no mention of this unless you download the PDF of the manual, they really should have a list of confirmed supported devices and a forum for users to report on how it works. My phone is a Huawei Mate 10 Pro so I guess I got lucky here. My phone has a desktop mode that can be enabled when I connect it to a USB-C device (not sure what criteria it uses to determine if the device is suitable). The desktop mode has something like a regular desktop layout and you can move windows around etc.
There is also the option of having a copy of the phone's screen, but it displays the image of the phone screen vertically in the middle of the landscape layout monitor which is ridiculous. When desktop mode is enabled it's independent of the phone interface so I had to find the icons for the programs I wanted to run in an unsorted list with no search usable (the search interface of the app list brings up the keyboard which obscures the list of matching apps). The keyboard takes up more than half the screen and there doesn't seem to be a way to make it smaller. I'd like to try a portrait layout which would make the keyboard take something like 25% of the screen but that's not supported. It's quite easy to type on a keyboard that's slightly larger than a regular PC keyboard (a 15 inch display with no numeric keypad or cursor control keys). The hackers keyboard app might work well with this as it has cursor control keys. The GUI has an option for full screen mode for an app which is really annoying to get out of (you have to use a drop down from the top of the screen), full screen doesn't make sense for a display this large. Overall the GUI is a bit clunky, imagine Windows 3.1 with a start button and task bar. One interesting thing to note is that the desktop and phone GUIs can be run separately, so you can type on the Desklab (or any similar device) and look things up on the phone. Multiple monitors never really interested me for desktop PCs because switching between windows is fast and easy and it's easy to resize windows to fit several on the desktop. Resizing windows on the Huawei GUI doesn't seem easy (although I might be missing some things) and the keyboard takes up enough of the screen that having multiple windows open while typing isn't viable. I wrote the first draft of this post on my phone using the Desklab display. It's not nearly as easy as writing on a laptop but much easier than writing on the phone screen. Currently Desklab is offering 2 models for sale, 4K resolution for $399US and FullHD for $299US. I got the 4K version which is very expensive at the moment when converted to Australian dollars. There are significantly cheaper USB-C monitors available (such as this ASUS one from Kogan for $369AU), but I don't think they have touch screens and therefore can't be used with a phone unless you enable the phone screen as touch pad mode and have a mouse cursor on screen. I don't know if all Android devices support that, it could be that a large part of the desktop experience I get is specific to Huawei devices. One annoying feature is that if I use the phone power button to turn the screen off it shuts down the connection to the Desklab display, but the phone screen will turn off if I leave it alone for the screen timeout (which I have set to 10 minutes).

Caveats

When I ordered this I wanted the biggest screen possible. But now that I have it the fact that it doesn't fit in the pocket of my Scott e Vest jacket [2] will limit what I can do with it. Maybe I'll be buying a 13 inch monitor in the near future, I expect that Desklab will do well and start selling them in a wide range of sizes. A 15.6 inch portable device is inconvenient even if it is in the laptop format, a thin portable screen is inconvenient in many ways. Netflix doesn't display video on the Desklab screen, I suspect that Netflix is doing this deliberately as some misguided attempt at stopping piracy.
It is really good for watching video as it has the speakers in good locations for stereo sound, it's a pity that Netflix is difficult. The functionality on phones from companies other than Huawei is unknown. It is likely to work on most Android phones, but if a particular phone is important to you then you want to Google for how it worked for others.

15 June 2020

Mark Brown: Book Club: Zettelkasten

Recently I was part of a call with Daniel and Lars to discuss Zettelkasten, a system for building up a cross-referenced archive of notes to help with research and study that has been getting a lot of discussion recently, the key thing being the building of links between ideas. Tomas Vik provided an overview of the process that we all found very helpful, and the information vs knowledge picture in Eugene Yan's blog on the topic (by @gapingvoid) really helped us crystalize the goals. It's not at all new and as Lars noted has a lot of similarities with a wiki in terms of what it produces, but it couples this with an emphasis on the process and constant generation of new entries, which Daniel found similar to some of the Getting Things Done recommendations. We all liked the emphasis on constant practice and how that can help build skills around effective note taking, clear writing and building links between ideas. Both Daniel and Lars already have note taking practices that they find useful, combinations of journalling and building up collections of notes of learnings over time, and felt that there could be value in integrating aspects of Zettelkasten into these practices, so we talked quite a bit about how that could be done. There was a consensus that journalling is useful so the main idea we had was to keep maintaining the journal, using that as an inbox and setting aside time to write entries into a Zettelkasten. This is also a useful way to approach recording things when away from a computer, taking notes and then writing them up later. Daniel suggested that one way to migrate existing notes might be to simply start anew, moving things over from old notes as required and then after a suitably long period (for example a year) reviewing and migrating anything that was left. We were all concerned about the idea of using any of the non-free solutions for something that is intended to be used long term, especially where the database isn't in an easily understood format. Fortunately there are free software tools like Zettlr which seem to address these concerns well. This was a really useful discussion, it really helps to bounce ideas off each other and this was certainly an interesting topic to learn about with some good ideas which will hopefully be helpful to us.

Russ Allbery: Radical haul

Along with the normal selection of science fiction and fantasy, a few radical publishers have done book giveaways due to the current political crisis in the United States. I've been feeling for a while like I've not done my homework on diverse political theory, so I downloaded those. (That's the easy part; making time to read them is the hard part, and we'll see how that goes.) Yarimar Bonilla & Marisol LeBrón (ed.) Aftershocks of Disaster (non-fiction anthology)
Jordan T. Camp & Christina Heatherton (ed.) Policing the Planet (non-fiction anthology)
Zachary D. Carter The Price of Peace (non-fiction)
Justin Akers Chacón & Mike Davis No One is Illegal (non-fiction)
Grace Chang Disposable Domestics (non-fiction)
Suzanne Collins The Ballad of Songbirds and Snakes (sff)
Angela Y. Davis Freedom is a Constant Struggle (non-fiction)
Danny Katch Socialism... Seriously (non-fiction)
Naomi Klein The Battle for Paradise (non-fiction)
Naomi Klein No is Not Enough (non-fiction)
Naomi Kritzer Catfishing on CatNet (sff)
Derek Künsken The Quantum Magician (sff)
Rob Larson Bit Tyrants (non-fiction)
Michael Löwy Ecosocialism (non-fiction)
Joe Macaré, Maya Schenwar, et al. (ed.) Who Do You Serve, Who Do You Protect? (non-fiction anthology)
Tochi Onyebuchi Riot Baby (sff)
Sarah Pinsker A Song for a New Day (sff)
Lina Rather Sisters of the Vast Black (sff)
Marta Russell Capitalism and Disability (non-fiction)
Keeanga-Yamahtta Taylor From #BlackLivesMatter to Black Liberation (non-fiction)
Keeanga-Yamahtta Taylor (ed.) How We Get Free (non-fiction anthology)
Linda Tirado Hand to Mouth (non-fiction)
Alex S. Vitale The End of Policing (non-fiction)
C.M. Waggoner Unnatural Magic (sff)
Martha Wells Network Effect (sff)
Kai Ashante Wilson Sorcerer of the Wildeeps (sff)

11 June 2020

Antoine Beaupré: CVE-2020-13777 GnuTLS audit: be scared

So CVE-2020-13777 came out while I wasn't looking last week. The GnuTLS advisory (GNUTLS-SA-2020-06-03) is pretty opaque so I'll refer instead to this tweet from @FiloSottile (Go team security lead):
PSA: don't rely on GnuTLS, please. CVE-2020-13777 Whoops, for the past 10 releases most TLS 1.0-1.2 connections could be passively decrypted and most TLS 1.3 connections intercepted. Trivially. Also, TLS 1.2-1.0 session tickets are awful.
You are reading this correctly: supposedly encrypted TLS connections made with affected GnuTLS releases are vulnerable to passive cleartext recovery attack (and active for 1.3, but who uses that anyways). That is extremely bad. It's pretty close to just switching everyone to HTTP instead of HTTPS, more or less. I would have a lot more to say about the security of GnuTLS in particular -- and security in general -- but I am mostly concerned about patching holes in the roof right now, so this article is not about that. This article is about figuring out what, exactly, was exposed in our infrastructure because of this.

Affected packages

Assuming you're running Debian, this will show a list of packages that Depends on GnuTLS:
apt-cache --installed rdepends libgnutls30 | grep '^ ' | sort -u
This assumes you run this only on hosts running Buster or above. Otherwise you'll need to figure out a way to pick machines running GnuTLS 3.6.4 or later. Note that this list only shows first-level dependencies! It is perfectly possible that another package uses GnuTLS without being listed here. For example, in the above list I have libcurl3-gnutls, so to be really thorough, I would actually need to recurse down the dependency tree (a rough sketch of that follows the list below). On my desktop, this shows an "interesting" list of targets:
  • apt
  • cadaver - AKA WebDAV
  • curl & wget
  • fwupd - another attack on top of this one
  • git (through the libcurl3-gnutls dependency)
  • mutt - all your emails
  • weechat - your precious private chats
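As mentioned above, the list only covers first-level dependencies; a rough sketch of recursing down the tree with apt-cache follows, where the grep filters are guesses at the rdepends output format and may need adjusting on a given system:
# Recursively list installed packages that depend, directly or indirectly,
# on libgnutls30. Non-indented lines are the package names visited.
apt-cache --installed --recurse rdepends libgnutls30 \
  | grep -v '^ ' | grep -v '^Reverse Depends:' | sort -u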
Arguably, fetchers like apt, curl, fwupd, and wget rely on HTTPS for "authentication" more than secrecy, although apt has its own OpenPGP-based authentication so that wouldn't matter anyways. Still, this is truly distressing. And I haven't mentioned here things like gobby, network-manager, systemd, and others - the scope of this is broad. Hell, even good old lynx links against GnuTLS. In our infrastructure, the magic command looks something like this:
cumin -o txt -p 0 'F:lsbdistcodename=buster' "apt-cache --installed rdepends libgnutls30 | grep '^ ' | sort -u" | tee gnutls-rdepds-per-host | awk '{print $NF}' | sort | uniq -c | sort -n
There, the result is even more worrisome, as those important packages seem to rely on GnuTLS for their transport security:
  • mariadb - all MySQL traffic and passwords
  • mandos - full disk encryption
  • slapd - LDAP passwords
mandos is especially distressing although it's probably not vulnerable because it seems it doesn't store the cleartext -- it's encrypted with the client's OpenPGP public key -- so the TLS tunnel never sees the cleartext either. Other reports have also mentioned the following servers link against GnuTLS and could be vulnerable:
  • exim
  • rsyslog
  • samba
  • various VNC implementations

Not affected

Those programs are not affected by this vulnerability:
  • apache2
  • gnupg
  • python
  • nginx
  • openssh
This list is not exhaustive, naturally, but serves as an example of common software you don't need to worry about. The vulnerability only exists in GnuTLS, as far as we know, so programs linking against other libraries are not vulnerable. Because the vulnerability affects session tickets -- and those are set on the server side of the TLS connection -- only users of GnuTLS as a server are vulnerable. This means, for example, that while weechat uses GnuTLS, it will only suffer from the problem when acting as a server (which it does, in relay mode) or, of course, if the remote IRC server also uses GnuTLS. Same with apt, curl, wget, or git: it is unlikely to be a problem because it is only used as a client; the remote server is usually a webserver -- not git itself -- when using TLS.

Caveats

Keep in mind that just because a package links against GnuTLS doesn't mean it actually uses it. For example, I have been told that, on Arch Linux, if both GnuTLS and OpenSSL are available, the mutt package will use the latter, so it's not affected. I haven't confirmed that myself nor have I checked on Debian. Also, because it relies on session tickets, there's a time window after which the ticket gets cycled and properly initialized. But that is apparently 6 hours by default so it is going to protect only really long-lasting TLS sessions, which are uncommon, I would argue. My audit is limited. For example, it might have been better to walk the shared library dependencies directly, instead of relying on Debian package dependencies.
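For what it's worth, a crude sketch of walking the shared library dependencies directly would be something like the following; it only scans the usual binary directories and will miss anything loaded with dlopen():
# Print binaries that link (directly or transitively) against libgnutls
# according to ldd.
for bin in /usr/bin/* /usr/sbin/*; do
  if ldd "$bin" 2>/dev/null | grep -q 'libgnutls\.so'; then
    echo "$bin"
  fi
done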

Other technical details

It seems the vulnerability might have been introduced in this merge request, itself following an (entirely reasonable) feature request to make it easier to rotate session tickets. The merge request was open for a few months and was thoroughly reviewed by a peer before being merged. Interestingly, the vulnerable function (_gnutls_initialize_session_ticket_key_rotation) explicitly says:
 * This function will not enable session ticket keys on the server side. That is done
 * with the gnutls_session_ticket_enable_server() function. This function just initializes
 * the internal state to support periodical rotation of the session ticket encryption key.
In other words, it thinks it is not responsible for session ticket initialization, yet it is. Indeed, the merge request fixing the problem unconditionally does this:
memcpy(session->key.initial_stek, key->data, key->size);
I haven't reviewed the code and the vulnerability in detail, so take the above with a grain of salt. The full patch is available here. See also the upstream issue 1011, the upstream advisory, the Debian security tracker, and the Redhat Bugzilla.

Moving forward

The impact of this vulnerability depends on the affected packages and how they are used. It can range from "meh, someone knows I downloaded that Debian package yesterday" to "holy crap my full disk encryption passwords are compromised, I need to re-encrypt all my drives", including "I need to change all LDAP and MySQL passwords". It promises to be a fun week for some people at least. Looking ahead, however, one has to wonder whether we should follow @FiloSottile's advice and stop using GnuTLS altogether. There are at least a few programs that link against GnuTLS because of the OpenSSL licensing oddities, but that has been first announced in 2015, then definitively and clearly resolved in 2017 -- or maybe that was in 2018? Anyways it's fixed, pinky-promise-I-swear, except if you're one of those weirdos still using GPL-2, of course. Even though OpenSSL isn't the simplest or most secure TLS implementation out there, it could be preferable to GnuTLS and maybe we should consider changing Debian packages to use it in the future. But then again, the last time something like this happened, it was Heartbleed and GnuTLS wasn't affected, so who knows... It is likely that people don't have OpenSSL in mind when they suggest moving away from GnuTLS and instead think of other TLS libraries like mbedtls (previously known as PolarSSL), NSS, BoringSSL, LibreSSL and so on. Not that those are totally sinless either... "This is fine", as they say...

2 June 2020

Lisandro Damián Nicanor Pérez Meyer: Simplified Monitoring of Patients in Situations of Mass Hospitalization (MoSimPa) - Fighting COVID-19

I have been quite absent from Debian stuff lately, but this has increased since COVID-19 hit us. In this blog post I'll try to sketch what I have been doing to help fight COVID-19 these last few months.

In the beginning

When the pandemic reached Argentina the government started a quarantine. We engineers (like engineers around the world) started to think about how to put our abilities to use in order to help with the situation. Some worked towards providing more protective equipment for medical staff, some towards increasing the number of ventilators available. Another group of people started thinking about other ways of helping. In Bahía Blanca the idea arose of monitoring some variables remotely and en masse.

Simplified Monitoring of Patients in Situations of Mass Hospitalization (MoSimPa)

This is where the idea of remotely monitored devices came in, and MoSimPa (from the Spanish "monitoreo simplificado de pacientes en situación de internación masiva") started to take shape. The idea is simple: oximetry (SpO2), heart rate and body temperature will be recorded and, instead of being shown on a display on the device itself, they will be transmitted and monitored in one or more places. In this way medical staff don't have to reach a patient constantly and monitoring can be done for more patients at the same time. In-place monitoring can also happen using a cellphone or tablet.

The devices have no screen of their own and almost no buttons, making them cheaper to build and thus more in line with the current economic reality of Argentina.


This is where the project Para Ayudar was created. The project aims to produce the aforementioned non-invasive device to be used in health institutions, hospitals, intra-hospital transport and homes.

It is worth noting that the system is designed as a complementary measure for continuous monitoring of a patient. Care should be taken to check that symptoms and overall patient status don't indicate an immediate threat to life. In other words, it is NOT designed for ICUs.

All of the above is done with Free/Libre/Open Source software and hardware designs. Any manufacturing company can then use them for mass production.

The importance of early pneumonia detection


We were already working on MoSimPa when an NYTimes article caught our attention: "The Infection That's Silently Killing Coronavirus Patients". From the article:

A vast majority of Covid pneumonia patients I met had remarkably low oxygen saturations at triage, seemingly incompatible with life, but they were using their cellphones as we put them on monitors. Although breathing fast, they had relatively minimal apparent distress, despite dangerously low oxygen levels and terrible pneumonia on chest X-rays.

This greatly reinforced the idea we were on the right track.

The project from a technical standpoint


As the project is primarily designed for and by Argentinians, the current system design and software documentation is written in Spanish, but the source code (or at least most of it) is written in English. Should anyone need it in English please do not hesitate to ask me.

General system description

System schema

The system comprises the devices, a main machine acting as a server (in our case, for small setups, a Raspberry Pi) and the possibility of accessing data through cell phones, tablets or other PCs on the network.

The hardware


As of today this is the only part for which I still can't provide schematics, but I'll update this blog post and the technical doc with them as soon as I get my hands on them.

Again, the design is meant to be built in Argentina, where getting our hands on hardware is not easy. Moreover it needs to be as cheap as possible, especially now that the Argentinian currency, the peso, is depreciating every day. So we decided on using an ESP32 as the main microprocessor and a set of Maxim sensor devices. Again, more info when I have them at hand.

The software


Here we have many more components to describe. Firstly the ESP32 code is done with the Arduino SDK. This part of the stack will receive many updates as soon as the first hardware prototypes are out.

For the rest of the stack I decided to go ahead with whatever is available in Debian stable. Why? Well, Raspbian provides a Debian stable-based image and I'm a Debian Developer, so things should go naturally for me on that front. Of course each component has its own packaging. I'm one of Debian's Qt maintainers, so using Qt will also be quite natural for me. Plots? Qwt, of course. And with that I have most of my necessities fulfilled. I chose PostgreSQL as the database server and Mosquitto as the MQTT broker.

Between the database and MQTT is mosimpa-datakeeper. The piece of software from which medical staff monitor patients is unsurprisingly called mosimpa-monitor.
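For anyone wanting to peek at the raw traffic the devices publish, Mosquitto ships a small client that can subscribe to the broker. A minimal sketch; the host name and topic filter below are placeholders for illustration, not the project's actual configuration:
# Subscribe to a topic subtree and print topic names alongside payloads (-v).
mosquitto_sub -h raspberrypi.local -t 'mosimpa/#' -v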

mosimpa-monitor
MoSimPa's monitor main screen

mosimpa-monitor plots
Plots of a patient's data


mosimpa-monitor-alarms-setup
Alarm thresholds setup


And for managing patients, devices, locations and internments (CRUD anyone?) there is currently a Qt-based application called mosimpa-abm.

mosimpa-abm
ABM main screen


mosimpa-abm-internments
ABM internments view

The idea is to replace it with a web service so it doesn't need to be confined to the RPi or require installations on other machines. I considered using WebAssembly but I would have to also build PostgreSQL in order to compile Qt's plugin.

Translations? Of course! As I have already mentioned the code is written in English. Qt makes it easy to translate applications, so I keep a Spanish translation up to date as the code changes (and we are primarily targeting Spanish-speaking people). But of course this also means it can be easily translated to whichever language is necessary.

Even though I am a packager I still have some stuff to fix in the packaging itself, like letting datakeeper run with its own user. I just haven't gotten to it yet.



Certifications


We are working towards getting the system certified by ANMAT, which is the Argentinian equivalent of the US FDA.

Funding


While all the people involved are working ad honorem, funding is still required in order to buy materials, create the prototypes, etc. The project created payment links with Mercado Pago (in Spanish and Argentinian pesos) and other bank methods (PDF, also in Spanish).

I repeat the links here with an approximation in US$.

- 500 AR$ (less than 8 US$)
- 1000 AR$ (less than 15 US$)
- 2000 AR$ (less than 30 US$)
- 3000 AR$ (less than 45 US$)
- 5000 AR$ (less than 75 US$)

You can check the actual conversion rate at https://www.google.com/search?q=argentine+peso+to+us+dollars

The project was also presented at a funding call of the Argentinian Agencia de Promoción de la Investigación, el Desarrollo Tecnológico y la Innovación (Agencia I+D+i). 900+ projects were presented and 64 funded, MoSimPa among them.

26 May 2020

Russ Allbery: Review: The Ten Thousand Doors of January

Review: The Ten Thousand Doors of January, by Alix E. Harrow
Publisher: Redhook
Copyright: September 2019
ISBN: 0-316-42198-7
Format: Kindle
Pages: 373
In 1901, at the age of seven, January found a Door. It was barely more than a frame in a ruined house in a field in Kentucky, but she wrote a story about opening it, and then did.
Once there was a brave and temeraryous (sp?) girl who found a Door. It was a magic Door that's why it has a capital D. She opened the Door.
The Door led to a bluff over the sea and above a city, a place very far from Kentucky, and she almost stayed, but she came back through the Door when her guardian, Mr. Locke, called. The adventure cost her a diary, several lectures, days of being locked in her room, and the remnants of her strained relationship with her father. When she went back, the frame of the Door was burned to the ground. That was the end of Doors for January for some time, and the continuation of a difficult childhood. She was cared for by her father's employer as a sort of exotic pet, dutifully attempting to obey, grateful for Mr. Locke's protection, and convinced that he was occasionally sneaking her presents through a box in the Pharaoh Room out of some hidden kindness. Her father appeared rarely, said little, and refused to take her with him. Three things helped: the grocery boy who smuggled her stories, an intimidating black woman sent by her father to replace her nurse, and her dog.
Once upon a time there was a good girl who met a bad dog, and they became the very best of friends. She and her dog were inseparable from that day forward.
I will give you a minor spoiler that I would have preferred to have had, since it would have saved me some unwarranted worry and some mental yelling at the author: The above story strains but holds. January's adventure truly starts the day before her seventeenth birthday, when she finds a book titled The Ten Thousand Doors in the box in the Pharaoh Room.

As you may have guessed from the title, The Ten Thousand Doors of January is a portal fantasy, but it's the sort of portal fantasy that is more concerned with the portal itself than the world on the other side of it. (Hello to all of you out there who, like me, have vivid memories of the Wood between the Worlds.) It's a book about traveling and restlessness and the possibility of escape, about the ability to return home again, and about the sort of people who want to close those doors because the possibility of change created by people moving around freely threatens the world they have carefully constructed.

Structurally, the central part of the book is told by interleaving chapters of January's tale with chapters from The Ten Thousand Doors. That book within a book starts with the framing of a scholarly treatment but quickly becomes a biography of a woman: Adelaide Lee Larson, a half-wild farm girl who met her true love at the threshold of a Door and then spent much of her life looking for him.

I am not a very observant reader for plot details, particularly for books that I'm enjoying. I read books primarily for the emotional beats and the story structure, and often miss rather obvious story clues. (I'm hopeless at guessing the outcomes of mysteries.) Therefore, when I say that there are many things January is unaware of that are obvious to the reader, that's saying a lot. Even more clues were apparent when I skimmed the first chapter again, and a more observant reader would probably have seen them on the first read. Right down to Mr. Locke's name, Harrow is not very subtle about the moral shape of this world.

That can make the early chapters of the book frustrating. January is being emotionally (and later physically) abused by the people who have power in her life, but she's very deeply trapped by false loyalty and lack of external context. Winning free of that is much of the story of the book, and at times it has the unpleasantness of watching someone make excuses for her abuser. At other times it has the unpleasantness of watching someone be abused.

But this is the place where I thought the nested story structure worked marvelously. January escapes into the story of The Ten Thousand Doors at the worst moments of her life, and the reader escapes with her. Harrow uses the desire to switch scenes back to the more adventurous and positive story to construct and reinforce the emotional structure of the book. For me, it worked extremely well.

It helps that the ending is glorious. The payoff is worth all the discomfort and tension-building in the first half of the book. Both The Ten Thousand Doors and the surrounding narrative reach deeply satisfying conclusions, ones that are entangled but separate in just the ways that they need to be. January's abilities, actions, and decisions at the end of the book were just the outcome that I needed but didn't entirely guess in advance. I could barely put down the last quarter of this story and loved every moment of the conclusion.

This is the sort of book that can be hard to describe in a review because its merits don't rest on an original twist or easily-summarized idea. The elements here are all elements found in other books: portal fantasy, the importance of story-telling, coming of age, found family, irrepressible and indomitable characters, and the battle of the primal freedom of travel and discovery and belief against the structural forces that keep rulers in place. The merits of this book are in the small details: the way that January's stories are sparse and rare and sometimes breathtaking, the handling of tattoos, the construction of other worlds with a few deft strokes, and the way Harrow embraces the emotional divergence between January's life and Adelaide's to help the reader synchronize the emotional structure of their reading experience with January's.
She writes a door of blood and silver. The door opens just for her.
The Ten Thousand Doors of January is up against a very strong slate for both the Nebula and the Hugo this year, and I suspect it may be edged out by other books, although I wouldn't be unhappy if it won. (It probably has a better shot at the Nebula than the Hugo.) But I will be stunned if Harrow doesn't walk away with the Mythopoeic Award. This seems like exactly the type of book that award was created for.

This is an excellent book, one of the best I've read so far this year. Highly recommended.

Rating: 9 out of 10

12 May 2020

Daniel Silverstone: The Lars, Mark, and Daniel Club

Last night, Lars, Mark, and I discussed Jeremy Kun's The communicative value of using Git well post. While a lot of our discussion was spawned by the article, we did go off-piste a little, and I hope that my notes below will enlighten you all as to a bit of how we see revision control these days. It was remarkably pleasant to read an article where the comments section wasn't a cesspool of horror, so if this posting encourages you to go and read the article, don't stop when you reach the bottom -- the comments are good and useful too.
This was a fairly non-contentious article for us; though each of us had points we wished to bring up and chat about, it turned into a very convivial chat. We saw the main thrust of the article as being about using the metadata of revision control to communicate intent, process, and decision making. We agreed that it must be possible to do so effectively with Mercurial (thus deciding that the mention of it was simply a bit of clickbait / red herring) and indeed Mark figured that he was doing at least some of this kind of thing way back with CVS.

We all discussed how knowing the fundamentals of Git's data model improved our ability to work with the tool. Lars and I mentioned how jarring it had originally been to come to Git from revision control systems such as Bazaar (bzr) but how over time we came to appreciate Git for what it is. For Mark this was less painful because he came to Git early enough that there was little more than the fundamental data model, without much of the porcelain which now exists.

One point which we all, though Mark in particular, felt was worth considering was that of distinguishing between published and unpublished changes. The article touches on it a little, but one of the benefits of the workflow which Jeremy espouses is that of using the revision control system as an integral part of the review pipeline. This is perhaps done very well with Git based workflows, but can be done with other VCSs.

With respect to the points Jeremy makes regarding making commits which are good for reviewing, we had a general agreement that things were good and sensible, to a point, but that some things were missed out on. For example, I raised that commit messages often need to be more thorough than one-liners, but Jeremy's examples (perhaps through expedience for the article?) were all pretty trite one-liners which perhaps would have been entirely obvious from the commit content. Jeremy makes the point that large changes are hard to review, and Lars pointed out that Greg Wilson did research in this area, and at least one article mentions 200 lines as a maximum size of a reviewable segment.

I had a brief warble at this point about how reviewing needs to be able to consider the whole of the intended change (i.e. a diff from base to tip), not just individual commits, which is also missing from Jeremy's article, but that such a review does not necessarily need to be thorough and detailed since the commit-by-commit review remains necessary. I use that high level diff as a way to get a feel for the full shape of the intended change, a look at the end-game if you will, before I read the story of how someone got to it.

As an aside at this point, I talked about how Jeremy included a 'style fixes' commit in his example, but I loathe seeing such things and would much rather it was either not in the series at all, because it's unrelated to it, or else the style fixes were folded into the commits they were related to. (A small sketch of the Git commands for both of these review habits appears below.)

We discussed how style rules, as well as commit-bisectability, and other rules which may exist for a codebase (the adherence to which would form part of the checks that a code reviewer may perform) are there to be held to when they help the project, and to be broken when they are in the way of good communication between humans. In this, Lars talked about how revision control histories provide high level valuable communication between developers.
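To make those two habits concrete, here is a minimal sketch of the Git commands involved; the branch names and the commit placeholder are my assumptions, not anything from Jeremy's article:

# The end-game: one diff of the whole intended change, from the merge base to the tip
git diff main...feature-branch

# The story: the same series reviewed commit by commit
git log --reverse --patch main..feature-branch

# Folding a stray "style fixes" change back into the commit it belongs to
git commit --fixup=<sha-of-the-commit-it-fixes>
git rebase --interactive --autosquash main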
Communication between humans is fraught with error and the rules are not always clear on what will work and what won't, since this depends on the communicators, the context, etc. However, whatever communication rules are in place should be followed. We often say that it takes two people to communicate information, but when you're writing commit messages or arranging your commit history, the second party is often a nebulous "other", and so the code reviewer fulfils that role to concretise it for the purpose of communication.

At this point, I wondered a little about what value there might be (if any) in maintaining the metachanges (pull request info, mailing list discussions, etc.) for historical context of decision making. Mark suggested that this is useful for design decisions etc. but not for the style/correctness discussions which often form a large section of review feedback. Maybe some of the metachange tracking is done automatically by the review causing the augmentation of the changes (e.g. by comments, or inclusion of design documentation changes) to explain why changes are made.

We discussed how the "rebase always vs. rebase never" feeling flip-flopped in us for many years until, as an example, what finally won Lars over was that he wants the history of the project to tell the story, in the git commits, of how the software has changed and evolved in an intentional manner. Lars said that he doesn't care about the meanderings, but rather about a clear story which can be followed and understood. I described this as the switch from "the revision history is about what I did to achieve the goal" to being more "the revision history is how I would hope someone else would have done this". Mark further refined that to "The revision history of a project tells the story of how the project, as a whole, chose to perform its sequence of evolution."

We discussed how project history must necessarily then contain issue tracking, mailing list discussions, wikis, etc. There exist free software projects where part of their history is forever lost because, for example, the project moved from SourceForge to GitHub, but made no effort (or was unable) to migrate issues or somesuch. Linkages between changes and the issues they relate to can easily be broken, though at least with mailing lists you can often rewrite URLs if you have something consistent like a Message-Id. We talked about how cover notes, pull request messages, etc. can thus also be lost to some extent. Is this an argument to always use merges whose message bodies contain those details, rather than always fast-forwarding? Or is it a reason to encapsulate all those discussions into git objects which can be forever associated with the changes in the DAG? (A small sketch of the merge-based option closes out this post.)

We then diverted into discussion of CI, testing every commit, and the benefits and limitations of automated testing vs. manual testing; though I think that's a little too off-piste for even this summary. We also talked about how commit message audiences perhaps include software too, with the recent movement toward Conventional Commits, and how, with respect to commit messages for machine readability, it can become very complex/tricky to craft good commit messages once there are multiple disparate audiences. For projects the size of the Linux kernel this kind of thing would be nearly impossible, but for smaller projects, perhaps there's value.

Finally, we all agreed that we liked the quote at the end of the article, and so I'd like to close out by repeating it for you all... Hal Abelson famously said:
Programs must be written for people to read, and only incidentally for machines to execute.
Jeremy agrees, as do we, and extends that to the metacommit information as well.
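As a footnote to the merge-versus-fast-forward question above, here is a minimal sketch of the merge-based option; the branch name is a placeholder:

# Record the cover note / pull request description in the merge commit itself,
# instead of fast-forwarding and losing it
git merge --no-ff --edit feature-branch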
